Trustworthy Feature Importance Avoids Unrestricted Permutations

Borgonovo, Emanuele, Cappelli, Francesco, Lu, Xuefei, Plischke, Elmar, Rudin, Cynthia

arXiv.org Machine Learning

Since their introduction by Breiman (2001), permutation-based feature importance measures have been widely adopted. However, randomly permuting the entries of a dataset may create new points far from the original data or even "impossible data." In a permuted dataset, we may find children who are retired or individuals who graduated from high school before they were born (Mase et al. 2022, p. 1). Forcing ML models to make predictions at these points causes them to extrapolate, making explanations unreliable (Hooker et al. 2021). Every non-trivial permutation-based variable importance measure, including SHAP (Lundberg and Lee 2017), Knockoffs (Barber and Candès 2015), conditional model reliance (Fisher et al. 2019), and accumulated local effect (ALE) plots (Apley and Zhu 2020), suffers from this. We propose and compare three new strategies to address extrapolation issues. The first combines conditional model reliance from Fisher et al. (2019) with a Gaussian transformation. By mapping data quantiles to a Gaussian distribution and back, we adjust only the quantiles of point values, significantly reducing extrapolation. Under a Gaussian copula assumption for the feature distribution, we prove that the new data points follow the same probability distribution as the original data.
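The quantile-to-Gaussian mapping the abstract describes can be sketched as a rank-based normal-scores transform: each value is replaced by its empirical quantile, pushed through the standard-normal inverse CDF, and later mapped back through the empirical quantile function. This is a minimal illustration under that reading, not the paper's implementation; the function names are ours.

```python
import numpy as np
from statistics import NormalDist  # stdlib standard-normal CDF / inverse CDF


def to_gaussian_scores(x):
    """Map each value's empirical quantile to a standard-normal score."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # 0-based ranks -> quantiles in (0, 1); ties are ignored in this sketch
    ranks = np.argsort(np.argsort(x)) + 1
    u = ranks / (n + 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in u])


def from_gaussian_scores(z, x_ref):
    """Map normal scores back through the empirical quantile function of x_ref."""
    nd = NormalDist()
    u = np.array([nd.cdf(v) for v in np.asarray(z, dtype=float)])
    return np.quantile(np.asarray(x_ref, dtype=float), u)


# Round trip on a skewed sample: scores are marginally ~N(0, 1),
# and the back-transform preserves the ordering of the original data.
rng = np.random.default_rng(0)
x = rng.exponential(size=500)
z = to_gaussian_scores(x)
x_back = from_gaussian_scores(z, x)
```

Any conditional permutation (or other perturbation) applied in the Gaussian score space, then mapped back, only rearranges quantile levels of the observed marginal, which is what limits extrapolation beyond the data's support.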


Scalable Learning of Multivariate Distributions via Coresets

Ding, Zeyu, Ickstadt, Katja, Klein, Nadja, Munteanu, Alexander, Omlor, Simon

arXiv.org Machine Learning

Efficient and scalable non-parametric or semi-parametric regression analysis and density estimation are of crucial importance to the fields of statistics and machine learning. However, available methods are limited in their ability to handle large-scale data. We address this issue by developing a novel coreset construction for multivariate conditional transformation models (MCTMs) to enhance their scalability and training efficiency. To the best of our knowledge, these are the first coresets for semi-parametric distributional models. Our approach yields substantial data reduction via importance sampling. It ensures with high probability that the log-likelihood remains within multiplicative error bounds of $(1\pm\varepsilon)$ and thereby maintains statistical model accuracy. Compared to conventional fully parametric models, where coresets have been incorporated before, our semi-parametric approach exhibits enhanced adaptability, particularly in scenarios where complex distributions and non-linear relationships are present but not fully understood. To address numerical problems associated with normalizing logarithmic terms, we adopt a geometric approximation based on the convex hull of the input data. This ensures feasible, stable, and accurate inference in scenarios involving large amounts of data. Numerical experiments demonstrate substantially improved computational efficiency when handling large and complex datasets, thus laying the foundation for a broad range of applications within the statistics and machine learning communities.
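The core mechanism behind such constructions, sampling points with probability proportional to an importance (sensitivity) score and reweighting so the subsample's weighted objective is an unbiased estimate of the full one, can be sketched generically. This is a toy illustration of importance-sampling coresets, not the paper's MCTM-specific construction; the scores and names below are placeholders.

```python
import numpy as np


def importance_coreset(scores, m, rng):
    """Draw a weighted coreset of size m.

    scores : nonnegative importance scores (sensitivity upper bounds),
             one per data point.
    Returns sampled indices and weights such that, for any per-point
    quantity f_i, sum_j w_j * f_{idx_j} is an unbiased estimate of
    sum_i f_i (Horvitz-Thompson style reweighting).
    """
    s = np.asarray(scores, dtype=float)
    p = s / s.sum()                      # sampling distribution
    idx = rng.choice(len(s), size=m, replace=True, p=p)
    w = 1.0 / (m * p[idx])               # inverse-probability weights
    return idx, w


# Toy check: per-point "log-likelihood contributions" with a proxy score.
rng = np.random.default_rng(1)
contrib = rng.exponential(size=10_000)          # stand-in per-point terms
idx, w = importance_coreset(contrib + 1e-3, 2_000, rng)
coreset_estimate = float((w * contrib[idx]).sum())
full_sum = float(contrib.sum())
```

When the scores tightly upper-bound each point's actual contribution, the estimator's variance is small and the weighted coreset objective stays within a multiplicative factor of the full objective, which is the property the $(1\pm\varepsilon)$ guarantee formalizes.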